139 research outputs found

    Isoperimetric Partitioning: A New Algorithm for Graph Partitioning

    Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or overshoot the amounts needed for precise acts. Each context of action may require a different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
    National Institute of Mental Health (R01 DC02582)

    Combinatorial Continuous Maximal Flows

    Maximum-flow (and minimum-cut) algorithms have had a strong impact on computer vision. In particular, graph-cut algorithms provide a mechanism for the discrete optimization of an energy functional, which has been used in a variety of applications such as image segmentation, stereo, image stitching and texture synthesis. Algorithms based on the classical formulation of max-flow defined on a graph are known to exhibit metrication artefacts in the solution. Therefore, a recent trend has been to instead employ a spatially continuous maximum flow (or the dual min-cut problem) in these same applications to produce solutions with no metrication errors. However, known fast continuous max-flow algorithms have no stopping criteria or have not been proved to converge. In this work, we revisit the continuous max-flow problem and show that the analogous discrete formulation is different from the classical max-flow problem. We then apply an appropriate combinatorial optimization technique to this combinatorial continuous max-flow (CCMF) problem to find a null-divergence solution that exhibits no metrication artefacts and may be solved exactly by a fast, efficient algorithm with provable convergence. Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the fact, already proved by Nozawa in the continuous setting, that the max-flow and total variation problems are not always equivalent.
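    The classical discrete formulation that the abstract contrasts with the continuous one can be illustrated with a standard augmenting-path solver. Below is a minimal Edmonds-Karp sketch in Python on a hand-made 4-node graph; the graph, capacities, and function names are illustrative only and do not come from the paper.

```python
# Minimal sketch of classical discrete max-flow (Edmonds-Karp), the kind of
# graph formulation whose metrication artefacts motivate the continuous
# (CCMF) reformulation above. Graph and capacities are made up.
from collections import deque

def max_flow(capacity, source, sink):
    """Repeatedly augment along shortest residual paths until none remain."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: the flow is maximal
        # find the bottleneck capacity along the path, then push flow
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual (reverse) capacity
            v = u
        total += bottleneck

# 4-node example: source 0, sink 3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 5: matches the min cut {0} vs rest (3 + 2)
```

    By max-flow/min-cut duality, the returned value equals the capacity of the cheapest set of edges separating source from sink, which is what graph-cut segmentation exploits.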

    A geometric multigrid approach to solving the 2D inhomogeneous Laplace equation with internal Dirichlet boundary conditions

    The inhomogeneous Laplace (Poisson) equation with internal Dirichlet boundary conditions has recently appeared in several applications to image processing and analysis. Although these approaches have demonstrated quality results, the computational burden of solution demands an efficient solver. Design of an efficient multigrid solver is difficult for these problems due to unpredictable inhomogeneity in the equation coefficients and internal Dirichlet conditions with arbitrary location and value. We present a geometric multigrid approach to solving these systems, designed around weighted prolongation/restriction operators and an appropriate system coarsening. This approach is compared against a modified incomplete Cholesky conjugate gradient solver for a range of image sizes. We note that this approach applies equally well to the anisotropic diffusion problem and offers an alternative method to the classic multigrid approach of Acton [1].
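    For readers unfamiliar with the system being solved, the following toy illustrates the inhomogeneous (weighted) Laplace equation with internal Dirichlet conditions on a 1D chain, solved here by plain Gauss-Seidel rather than the paper's geometric multigrid; all weights and seed values are made up.

```python
# Toy version of the linear system in question: at every free node the
# solution is the conductance-weighted average of its neighbours, while
# "seed" nodes are internally fixed (Dirichlet). The paper's contribution
# is a multigrid solver for such systems; this sketch only shows the system.
def solve_weighted_laplace(weights, seeds, n, iters=2000):
    """Solve sum_j w_ij (x_i - x_j) = 0 at free nodes of a 1D chain.

    weights[i] joins node i and i+1; seeds maps node index -> fixed value.
    """
    x = [0.0] * n
    for i, v in seeds.items():
        x[i] = v
    for _ in range(iters):          # Gauss-Seidel sweeps
        for i in range(n):
            if i in seeds:
                continue            # internal Dirichlet node stays fixed
            num = den = 0.0
            if i > 0:
                num += weights[i - 1] * x[i - 1]; den += weights[i - 1]
            if i < n - 1:
                num += weights[i] * x[i + 1]; den += weights[i]
            x[i] = num / den        # weighted average of neighbours
    return x

# 5-node chain, ends seeded at 0 and 1; the weak middle edge (0.01)
# concentrates nearly the whole potential drop across that edge.
w = [1.0, 1.0, 0.01, 1.0]
x = solve_weighted_laplace(w, {0: 0.0, 4: 1.0}, 5)
```

    The exact solution follows from the resistor analogy (drops proportional to 1/w), e.g. x[2] = 2/103, which the iteration reproduces.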

    Anisotropic diffusion using power watersheds

    Many computer vision applications such as image filtering, segmentation and stereo vision can be formulated as optimization problems. Whereas in previous decades continuous-domain, iterative procedures were common, recently discrete, convex, globally optimal methods such as graph cuts have received a lot of attention. However, not all problems in computer vision are convex, for instance L0-norm optimization such as seen in compressive sensing. Recently, a novel discrete framework encompassing many known segmentation methods was proposed: the power watershed. We explore the possibilities of this minimizer for solving problems other than segmentation, in particular the optimization of unusual norms. In this article we reformulate the problem of anisotropic diffusion as an L0 optimization problem, and we show that power watersheds are able to optimize this energy quickly and effectively. This study paves the way for using the power watershed as a useful general-purpose minimizer in many different computer vision contexts.
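    To give a concrete sense of the L0 reformulation, here is a hand-rolled 1D analogue: a Potts-style energy paying a quadratic data cost plus a constant penalty per non-zero gradient, minimized exactly by dynamic programming over segment boundaries. This only illustrates what an L0 gradient energy rewards (piecewise-constant output); the power-watershed minimizer in the paper is a far more general tool, and nothing below is taken from it.

```python
# Exact minimizer of  E(x) = ||x - f||^2 + lam * #{i : x_i != x_{i+1}}
# on a 1D signal: dynamic programming over the last segment boundary.
def l0_denoise_1d(f, lam):
    n = len(f)

    def seg_cost(i, j):
        """Squared error of fitting samples f[i..j] by their mean."""
        seg = f[i:j + 1]
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    best = [0.0] * (n + 1)   # best[j] = optimal energy for the prefix f[:j]
    cut = [0] * (n + 1)      # cut[j]  = start of the last segment
    for j in range(1, n + 1):
        best[j] = float("inf")
        for i in range(j):
            # a boundary (and its lam penalty) only exists when i > 0
            c = best[i] + seg_cost(i, j - 1) + (lam if i > 0 else 0.0)
            if c < best[j]:
                best[j], cut[j] = c, i
    # back-track the boundaries and emit the piecewise-constant result
    x, j = [0.0] * n, n
    while j > 0:
        i = cut[j]
        m = sum(f[i:j]) / (j - i)
        for k in range(i, j):
            x[k] = m
        j = i
    return x

noisy = [0.1, -0.1, 0.0, 1.1, 0.9, 1.0]
print(l0_denoise_1d(noisy, lam=0.5))  # two flat segments, near 0 and near 1
```

    Raising lam merges segments (fewer non-zero gradients); lowering it lets the output track the data, which is exactly the trade-off an L0 gradient energy encodes.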

    Dual constrained TV-based regularization on graphs

    Algorithms based on Total Variation (TV) minimization are prevalent in image processing. They play a key role in a variety of applications such as image denoising, compressive sensing and inverse problems in general. In this work, we extend the TV dual framework that includes Chambolle's and Gilboa-Osher's projection algorithms for TV minimization. We use a flexible graph data representation that allows us to generalize the constraint on the projection variable. We show how this new formulation of the TV problem may be solved by means of fast parallel proximal algorithms. On denoising and deblurring examples, the proposed approach is shown not only to perform better than recent TV-based approaches, but also to perform well on arbitrary graphs instead of regular grids. The proposed method consequently applies to a variety of other inverse problems, including image fusion and mesh filtering.
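    As background on the dual framework the abstract extends, a minimal 1D version of Chambolle's projection algorithm for TV (ROF) denoising can be written in a few lines. The step size, iteration count, and free boundary handling below are conventional textbook choices on a chain graph, not the paper's generalized graph formulation.

```python
# 1D ROF denoising, min_u 0.5*||u - f||^2 + lam * TV(u), via Chambolle's
# dual projection iteration: u = f - lam * div(p) with the dual variable p
# (one entry per edge) kept inside |p_i| <= 1 by a semi-implicit update.
def tv_denoise_1d(f, lam, iters=500, tau=0.25):
    n = len(f)
    p = [0.0] * (n - 1)  # dual variable on edges, constraint |p_i| <= 1

    def divergence(p):
        """div(p) at each node, with free (Neumann-like) boundaries."""
        return [(p[i] if i < n - 1 else 0.0) - (p[i - 1] if i > 0 else 0.0)
                for i in range(n)]

    for _ in range(iters):
        div = divergence(p)
        # forward difference of (div p - f/lam), one value per edge
        g = [(div[i + 1] - f[i + 1] / lam) - (div[i] - f[i] / lam)
             for i in range(n - 1)]
        # Chambolle's update; dividing by (1 + tau*|g|) keeps |p_i| <= 1
        p = [(p[i] + tau * g[i]) / (1 + tau * abs(g[i]))
             for i in range(n - 1)]
    div = divergence(p)
    return [f[i] - lam * div[i] for i in range(n)]

f = [0.0, 1.0, 0.2, 0.9, 0.1, 1.0]
u = tv_denoise_1d(f, lam=0.5)  # smoother than f; the mean of f is preserved
```

    Because the divergence telescopes to zero on a chain, the iteration conserves the signal mean, a quick sanity check for any TV dual solver.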

    Dual constrained TV-based regularization

    Algorithms based on the minimization of the Total Variation are prevalent in computer vision. They are used in a variety of applications such as image denoising, compressive sensing and inverse problems in general. In this work, we extend the TV dual framework that includes Chambolle's and Gilboa-Osher's projection algorithms for TV minimization to a flexible graph data representation by generalizing the constraint on the projection variable. We show how this new formulation of the TV problem may be solved by means of a fast parallel proximal algorithm, which performs better than the classical TV approach for denoising, and is also applicable to inverse problems such as image deblurring.

    Asphalt Assault: A Look at the Urban Heat Island Effect


    Accelerated parallel magnetic resonance imaging reconstruction using joint estimation with a sparse signal model

    Accelerating magnetic resonance imaging (MRI) by reducing the number of acquired k-space scan lines benefits conventional MRI significantly by decreasing the time subjects remain in the magnet. In this paper, we formulate a novel method for Joint estimation from Undersampled LinEs in Parallel MRI (JULEP) that simultaneously calibrates the GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) reconstruction kernel and reconstructs the full multi-channel k-space. We employ a joint-sparsity signal model for the channel images in conjunction with observation models for both the acquired data and GRAPPA-reconstructed k-space. We demonstrate using real MRI data that JULEP outperforms conventional GRAPPA reconstruction at high levels of undersampling, increasing the peak signal-to-noise ratio by up to 10 dB.
    National Science Foundation (U.S.) (CAREER Grant 0643836); National Center for Research Resources (U.S.) (P41 RR014075); National Institutes of Health (U.S.) (NIH R01 EB007942); National Institutes of Health (U.S.) (NIH R01 EB006847); Siemens Corporation; National Science Foundation (U.S.) Graduate Research Fellowship Program
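    A toy, single-channel sketch of the GRAPPA mechanism that JULEP calibrates jointly: missing k-space lines are synthesized as a learned linear combination of acquired neighbouring lines, with the combination weights (the "kernel") fit by least squares on a fully sampled auto-calibration (ACS) region. Real GRAPPA is multi-coil and two-dimensional; every signal and function name below is synthetic and for illustration only.

```python
# Single-channel, 1D caricature of GRAPPA: learn s[k] ~ w0*s[k-1] + w1*s[k+1]
# from the ACS region, then use the learned kernel to fill the skipped lines
# of a 2x-undersampled "k-space".
def calibrate_kernel(acs):
    """Fit the two kernel weights by least squares (2x2 normal equations)."""
    rows = [((acs[k - 1], acs[k + 1]), acs[k]) for k in range(1, len(acs) - 1)]
    a00 = sum(x[0] * x[0] for x, _ in rows)
    a01 = sum(x[0] * x[1] for x, _ in rows)
    a11 = sum(x[1] * x[1] for x, _ in rows)
    b0 = sum(x[0] * y for x, y in rows)
    b1 = sum(x[1] * y for x, y in rows)
    det = a00 * a11 - a01 * a01  # assumes the ACS fit is well-posed
    return ((a11 * b0 - a01 * b1) / det, (a00 * b1 - a01 * b0) / det)

def grappa_fill(kspace, w):
    """Fill the odd (skipped) lines from their acquired even neighbours."""
    out = list(kspace)
    for k in range(1, len(out) - 1, 2):
        out[k] = w[0] * out[k - 1] + w[1] * out[k + 1]
    return out

w = calibrate_kernel([0.0, 1.0, 2.0, 3.0, 4.0])  # linear ACS -> w = (0.5, 0.5)
full = grappa_fill([0.0, 0.0, 2.0, 0.0, 4.0, 0.0, 6.0], w)  # recovers the ramp
```

    JULEP's point is that this calibration and the subsequent fill need not be sequential steps: both can be estimated jointly under a sparsity model when the ACS data is scarce.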

    Sparsity-Promoting Calibration for GRAPPA Accelerated Parallel MRI Reconstruction

    The amount of calibration data needed to produce images of adequate quality can prevent auto-calibrating parallel imaging reconstruction methods like generalized autocalibrating partially parallel acquisitions (GRAPPA) from achieving a high total acceleration factor. To improve the quality of calibration when the number of auto-calibration signal (ACS) lines is restricted, we propose a sparsity-promoting regularized calibration method that finds a GRAPPA kernel consistent with the ACS fit equations that yields jointly sparse reconstructed coil channel images. Several experiments evaluate the performance of the proposed method relative to unregularized and existing regularized calibration methods for both low-quality and underdetermined fits from the ACS lines. These experiments demonstrate that the proposed method, like other regularization methods, is capable of mitigating noise amplification, and in addition, the proposed method is particularly effective at minimizing coherent aliasing artifacts caused by poor kernel calibration in real data. Using the proposed method, we can increase the total achievable acceleration while reducing degradation of the reconstructed image better than existing regularized calibration methods.
    National Science Foundation (U.S.) (CAREER Grant 0643836); National Institutes of Health (U.S.) (Grant NIH R01 EB007942); National Institutes of Health (U.S.) (Grant NIH R01 EB006847); National Institutes of Health (U.S.) (Grant NIH P41 RR014075); National Institutes of Health (U.S.) (Grant NIH K01 EB011498); National Institutes of Health (U.S.) (Grant NIH F32 EB015914); National Science Foundation (U.S.) Graduate Research Fellowship Program
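    The general flavour of sparsity-promoting regularized fitting can be sketched with ISTA, the proximal-gradient method for l1-regularized least squares. Note the assumption being made: the paper's regularizer promotes joint sparsity of the reconstructed channel images, whereas this toy applies a plain l1 penalty to a generic least-squares problem, so it shows the mechanism (gradient step plus soft-thresholding), not the paper's method.

```python
# ISTA for  min_x 0.5*||A x - b||^2 + lam*||x||_1 : alternate a gradient
# step on the quadratic term with soft-thresholding (the l1 prox).
def ista(A, b, lam, t, iters=3000):
    """t must satisfy t <= 1 / ||A^T A|| for convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n

    def soft(v, s):
        # soft-thresholding: shrink toward zero by s, clip small values to 0
        return v - s if v > s else v + s if v < -s else 0.0

    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = [soft(x[j] - t * g[j], t * lam) for j in range(n)]
    return x

# With A = I this reduces to the l1 prox: soft-thresholding of b by lam,
# so the small middle entry of b is driven exactly to zero.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x = ista(I3, [3.0, 0.05, -2.0], lam=0.1, t=0.5)
```

    The same shrink-to-zero behaviour is what lets a sparsity-promoting penalty suppress noisy or poorly determined components of a calibration fit while leaving the well-supported ones largely intact.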